Self-Supervised Learning for Specified Latent Representation
Authors
Abstract
Similar resources
Latent Supervised Learning.
A new machine learning task is introduced, called latent supervised learning, where the goal is to learn a binary classifier from continuous training labels which serve as surrogates for the unobserved class labels. A specific model is investigated where the surrogate variable arises from a two-component Gaussian mixture with unknown means and variances, and the component membership is determin...
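To make the surrogate-label setting above concrete, here is a minimal sketch (assuming scikit-learn, a synthetic one-dimensional surrogate variable, and hard mixture assignments as pseudo-labels; the variable names and thresholding step are illustrative, not the paper's estimator):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic continuous surrogate labels: a two-component Gaussian mixture
# whose means/variances are unknown to the learner, with hidden memberships z.
rng = np.random.default_rng(0)
z = rng.binomial(1, 0.5, size=500)                # unobserved class labels
y = rng.normal(np.where(z == 1, 2.0, -1.0),       # surrogate variable
               np.where(z == 1, 0.8, 0.5))

# Fit a two-component mixture to the surrogates and take hard component
# assignments as stand-ins for the unobserved binary classes.
gmm = GaussianMixture(n_components=2, random_state=0).fit(y.reshape(-1, 1))
pseudo_labels = gmm.predict(y.reshape(-1, 1))

# The pseudo-labels could then supervise an ordinary binary classifier on the
# input features; note the usual label-switching ambiguity of mixture models.
agreement = max(np.mean(pseudo_labels == z), np.mean(pseudo_labels != z))
print(f"agreement with hidden labels: {agreement:.2f}")
```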
Latent Semantic Representation Learning for Scene Classification
The performance of machine learning methods is heavily dependent on the choice of data representation. In real-world applications such as scene recognition problems, the widely used low-level input features can fail to explain the high-level semantic label concepts. In this work, we address this problem by proposing a novel patch-based latent variable model to integrate latent contextual represe...
Using Both Latent and Supervised Shared Topics for Multitask Learning
This paper introduces two new frameworks, Doubly Supervised Latent Dirichlet Allocation (DSLDA) and its non-parametric variation (NP-DSLDA), that integrate two different types of supervision: topic labels and category labels. This approach is particularly useful for multitask learning, in which both latent and supervised topics are shared between multiple categories. Experimental results on bot...
Supervised Learning for Self-Generating Neural Networks
In this paper, supervised learning for the Self-Generating Neural Networks (SGNN) method, which was originally developed for the purpose of unsupervised learning, is discussed. An information analytical method is proposed to assign weights to attributes in the training examples if class information is available. This significantly improves the learning speed and the accuracy of the SGNN classifier. ...
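The abstract does not give the exact weighting formula, so the following is only a loose sketch of class-informed attribute weighting, assuming mutual information between each attribute and the class label as the weight (scikit-learn; names are illustrative):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif

# Attribute weights from class information: attributes that carry more
# information about the class label receive larger weights.
X, y = load_iris(return_X_y=True)
weights = mutual_info_classif(X, y, random_state=0)
weights = weights / weights.sum()          # normalize to sum to 1

# A weighted feature space like this could emphasize class-informative
# attributes in subsequent distance or similarity computations.
X_weighted = X * weights
print(dict(zip(load_iris().feature_names, np.round(weights, 3))))
```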
Self-supervised Learning for Spinal MRIs
A significant proportion of patients scanned in a clinical setting have follow-up scans. We show in this work that such longitudinal scans alone can be used as a form of “free” self-supervision for training a deep network. We demonstrate this self-supervised learning for the case of T2-weighted sagittal lumbar Magnetic Resonance Images (MRIs). A Siamese convolutional neural network (CNN) is tra...
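As a rough illustration of the Siamese setup described above, here is a minimal sketch of a shared encoder trained on scan pairs (assuming PyTorch; the toy input size, architecture, and margin-based contrastive loss are illustrative assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Shared CNN encoder applied to each scan in a pair."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def contrastive_loss(e1, e2, same_patient, margin=1.0):
    """Pull embeddings of the same patient's scans together, push others apart."""
    d = F.pairwise_distance(e1, e2)
    return torch.mean(same_patient * d.pow(2) +
                      (1 - same_patient) * F.relu(margin - d).pow(2))

# Toy usage: a batch of 8 scan pairs, single-channel 64x64 images.
enc = SiameseEncoder()
x1, x2 = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
same = torch.randint(0, 2, (8,)).float()   # 1 = follow-up pair, 0 = unrelated
loss = contrastive_loss(enc(x1), enc(x2), same)
loss.backward()
```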
Journal
Journal title: IEEE Transactions on Fuzzy Systems
Year: 2020
ISSN: 1063-6706, 1941-0034
DOI: 10.1109/tfuzz.2019.2904237